Malaria, a fatal but curable disease, claims hundreds of thousands of lives every year. Early and correct diagnosis is vital to avoid health complications, but it depends on the availability of costly microscopes and trained experts to analyze blood-smear slides. Deep learning-based methods can potentially not only reduce the burden on experts but also improve diagnostic accuracy on low-cost microscopes. However, progress is hampered by the absence of a reasonably sized dataset, and one of the most challenging aspects is the reluctance of experts to annotate data captured at low magnification on low-cost microscopes. We present a dataset to further research on malaria microscopy over low-cost microscopes at low magnification. Our large-scale dataset consists of blood-smear slide images from several malaria-infected patients, collected through microscopes of two different cost spectra at multiple magnifications. Malarial cells are annotated for the localization and life-stage classification tasks on images collected through the high-cost microscope at high magnification. We design a mechanism to transfer these annotations from the high-cost, high-magnification setting to the low-cost microscope at multiple magnifications. Multiple object detectors and domain adaptation methods are presented as baselines. Further, a partially supervised domain adaptation method is introduced to adapt an object detector to work on images collected from the low-cost microscope. The dataset will be made publicly available after publication.
translated by Google Translate
In the post-COVID-19 world, radio frequency (RF)-based non-contact methods, e.g., software-defined radio (SDR)-based methods, have emerged as promising candidates for intelligent remote sensing of human vitals and could help contain contagious viruses like COVID-19. To this end, this work utilizes universal software radio peripheral (USRP)-based SDRs along with classical machine learning (ML) methods to design a non-contact method to monitor different breathing abnormalities. Under our proposed method, a subject rests his/her hand on a table between the transmit and receive antennas while an orthogonal frequency-division multiplexing (OFDM) signal passes through the hand. Subsequently, the receiver extracts the channel frequency response (basically, fine-grained wireless channel state information) and feeds it to various ML algorithms, which eventually classify between different breathing abnormalities. Among all classifiers, the linear SVM classifier resulted in a maximum accuracy of 88.1\%. To train the ML classifiers in a supervised manner, data was collected through real-time experiments on 4 subjects in a lab environment. For label generation, the breathing of the subjects was classified into three classes: normal, fast, and slow breathing. Furthermore, in addition to our proposed method (where only a hand is exposed to RF signals), we also implemented and tested the state-of-the-art method (where the full chest is exposed to RF radiation). The performance comparison of the two methods reveals a trade-off: the accuracy of our proposed method is slightly inferior to the benchmark method, but our method results in minimal body exposure to RF radiation.
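A minimal sketch of the classification stage described above, using scikit-learn's linear SVM. The synthetic channel-frequency-response traces, breathing-rate values, and feature layout below are stand-ins, not the paper's actual USRP data or code:

```python
# Hypothetical sketch: classifying breathing patterns from channel frequency
# response (CFR) features with a linear SVM. The synthetic data stands in
# for real USRP measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subcarriers = 64  # one CFR magnitude per OFDM subcarrier (assumed layout)

def synth_cfr(rate, n=100):
    """Simulate n CFR magnitude traces modulated by a breathing-rate shape."""
    t = np.linspace(0, 1, n_subcarriers)
    base = np.sin(2 * np.pi * rate * t)
    return base + 0.1 * rng.standard_normal((n, n_subcarriers))

# Three classes, matching the paper's label scheme: normal, fast, slow breathing
X = np.vstack([synth_cfr(0.25), synth_cfr(0.6), synth_cfr(0.1)])
y = np.repeat([0, 1, 2], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On this cleanly separable toy data the linear SVM scores near-perfectly; the paper's 88.1\% reflects the harder real-world measurements.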
Cross-view geo-localization aims to estimate the location of a query ground image by matching it against a database of reference geo-tagged aerial images. An extremely challenging task, its difficulty is rooted in the drastic view changes and the different capture times of the two views. Despite these difficulties, recent works have achieved outstanding progress on cross-view geo-localization benchmarks. However, existing methods still suffer from poor performance on cross-area benchmarks, in which the training and testing data are captured in two different regions. We attribute this deficiency to the inability to extract the spatial configuration of visual feature layouts and to models overfitting on low-level details from the training set. In this paper, we propose GeoDTR, which explicitly disentangles geometric information from raw features and learns the spatial correlations among visual features from aerial and ground pairs with a novel geometric layout extractor module. This module generates a set of geometric layout descriptors that modulate the raw features and produce high-quality latent representations. In addition, we elaborate two categories of data augmentation: (i) layout simulation, which varies the spatial configuration while keeping the low-level details intact, and (ii) semantic augmentation, which alters the low-level details and encourages the model to capture spatial configurations. These augmentations help improve the performance of cross-view geo-localization models, especially on cross-area benchmarks. Moreover, we propose a counterfactual-based learning process to help the geometric layout extractor explore spatial information. Extensive experiments show that GeoDTR not only achieves state-of-the-art results but also significantly boosts performance on both same-area and cross-area benchmarks.
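The two augmentation categories can be illustrated with a toy NumPy sketch. The specific transforms below (a mirror/roll pair for layout simulation, a per-channel gain for semantic augmentation) are illustrative assumptions, not GeoDTR's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def layout_simulation(aerial, ground):
    """Vary the spatial configuration, keep low-level details: mirror the
    aerial image and mirror/roll the ground panorama to stay consistent."""
    flipped = aerial[:, ::-1]                         # left-right flip
    shift = ground.shape[1] // 2
    rolled = np.roll(ground[:, ::-1], shift, axis=1)  # matching panorama shift
    return flipped, rolled

def semantic_augmentation(img):
    """Alter low-level details, keep the layout: jitter each color channel."""
    gain = rng.uniform(0.8, 1.2, size=(1, 1, img.shape[2]))
    return np.clip(img * gain, 0.0, 1.0)

aerial = rng.random((64, 64, 3))
ground = rng.random((32, 128, 3))   # ground view as a panorama strip
a_aug, g_aug = layout_simulation(aerial, ground)
s_aug = semantic_augmentation(aerial)
```

Layout simulation changes where things are without changing what they look like; semantic augmentation does the opposite, which is exactly the split the paper exploits.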
In the pursuit of ever-increasing accuracy, large and complex neural networks are usually developed. Such models demand high computational resources and therefore cannot be deployed on edge devices. Building resource-efficient general-purpose networks is of great interest due to their usefulness in several application areas. In this work, we strive to effectively combine the strengths of CNN and Transformer models and propose a new efficient hybrid architecture, EdgeNeXt. Specifically in EdgeNeXt, we introduce a split depth-wise transposed attention (SDTA) encoder that splits input tensors into multiple channel groups and utilizes depth-wise convolution along with self-attention across channel dimensions to implicitly increase the receptive field and encode multi-scale features. Our extensive experiments on classification, detection, and segmentation tasks reveal the merits of the proposed approach, which outperforms state-of-the-art methods with comparatively lower computational requirements. Our EdgeNeXt model with 1.3M parameters achieves 71.2\% top-1 accuracy on ImageNet-1K, outperforming MobileViT with an absolute gain of 2.2\% and a 28\% reduction in FLOPs. Further, our EdgeNeXt model with 5.6M parameters achieves 79.4\% top-1 accuracy on ImageNet-1K. Code and models are publicly available at https://t.ly/_vu9.
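A rough NumPy sketch of the channel-splitting idea behind the SDTA encoder: channel groups are processed by cascaded depth-wise operations so that later groups accumulate a larger effective receptive field, yielding multi-scale features cheaply. The function names and the averaging stand-in for a learned depth-wise convolution are assumptions, not the EdgeNeXt code:

```python
import numpy as np

def depthwise_avg3(x):
    """Cheap depth-wise 3x3 averaging as a stand-in for a depth-wise conv."""
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    return sum(p[:, i:i + x.shape[1], j:j + x.shape[2]]
               for i in range(3) for j in range(3)) / 9.0

def split_cascade_encoder(x, n_groups=4):
    """Split channels into groups and cascade depth-wise ops: group k sees
    the output of group k-1, so its effective receptive field grows."""
    groups = np.split(x, n_groups, axis=0)
    outs, prev = [], None
    for g in groups:
        prev = depthwise_avg3(g if prev is None else g + prev)
        outs.append(prev)
    return np.concatenate(outs, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))   # (channels, height, width)
y = split_cascade_encoder(x)
```

The real SDTA encoder additionally applies self-attention across the channel dimension after this multi-scale mixing, which keeps the attention cost independent of spatial resolution.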
The concept of geo-localization refers to the process of determining the location of certain "entities" on Earth, typically using Global Positioning System (GPS) coordinates. The entity of interest may be an image, a sequence of images, a video, a satellite image, or even objects visible within an image. As large-scale datasets of GPS-tagged media have rapidly become available thanks to smartphones and the internet, and as deep learning has risen to boost the performance of machine learning models, the fields of visual and object geo-localization have emerged because of their significant impact on a wide range of applications, such as augmented reality, robotics, self-driving vehicles, road maintenance, and 3D reconstruction. This paper provides a comprehensive survey of geo-localization involving images, covering both determining the geo-location at which an image was captured (image geo-localization) and geo-localizing objects within an image (object geo-localization). We provide an in-depth study, including summaries of popular algorithms, descriptions of proposed datasets, and analyses of performance results, to illustrate the current state of each field.
With their robustness to adverse weather conditions and their ability to measure speed, radar sensors have been part of the automotive landscape for more than two decades. Recent advances in high-definition (HD) imaging radar have brought angular resolution below one degree, thus approaching laser-scanning performance. However, the amount of data HD radars deliver and the computational cost of estimating angular positions remain a challenge. In this paper, we propose a novel HD radar sensing model, FFT-RadNet, which eliminates the overhead of computing the range-azimuth-Doppler 3D tensor by instead learning to recover angles from the range-Doppler spectrum. FFT-RadNet is trained both to detect vehicles and to segment free driving space. On both tasks, it competes with the most recent radar-based models while requiring less computation and memory. In addition, we collected and annotated two hours of raw data from synchronized automotive-grade sensors (camera, laser, HD radar) in various environments (city streets, highways, countryside roads). This unique dataset, named RADIal for "Radar, Lidar et al.", is available at https://github.com/valeoai/radial.
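For context, the conventional pre-processing that FFT-RadNet starts from (and whose costly 3D range-azimuth-Doppler extension it avoids) is a range-Doppler map computed with two FFTs: one over fast time for range, one over slow time for Doppler. A hedged NumPy sketch on a synthetic one-target signal, with made-up dimensions:

```python
import numpy as np

def range_doppler_map(adc):
    """Range-Doppler map from raw FMCW samples.
    adc: (n_chirps, n_samples), fast time along axis 1."""
    range_fft = np.fft.fft(adc, axis=1)                        # fast time -> range
    rd = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)  # slow time -> Doppler
    return np.abs(rd)

# Synthetic target: beat frequency at range bin 20, Doppler bin 5
n_chirps, n_samples = 64, 128
m = np.arange(n_chirps)[:, None]
n = np.arange(n_samples)[None, :]
beat = np.exp(2j * np.pi * (20 * n / n_samples + 5 * m / n_chirps))
rd_map = range_doppler_map(beat)
doppler_bin, range_bin = np.unravel_index(rd_map.argmax(), rd_map.shape)
```

The peak lands at range bin 20 and, after `fftshift`, at Doppler position 5 + 32 = 37; FFT-RadNet's contribution is to learn the angle estimate directly from spectra like this one.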
Since convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data, these models have been extensively applied to image restoration and related tasks. Recently, another class of neural architectures, Transformers, has shown significant performance gains on natural language and high-level vision tasks. While Transformer models mitigate the shortcomings of CNNs (i.e., a limited receptive field and inadaptability to input content), their computational complexity grows quadratically with spatial resolution, making them infeasible for most image restoration tasks, which involve high-resolution images. In this work, we propose an efficient Transformer model by making several key designs in the building blocks (multi-head attention and feed-forward network) such that it can capture long-range pixel interactions while remaining applicable to large images. Our model, named Restoration Transformer (Restormer), achieves state-of-the-art results on several image restoration tasks, including image deraining, single-image motion deblurring, defocus deblurring (single-image and dual-pixel data), and image denoising (Gaussian grayscale/color denoising, and real image denoising). The source code and pre-trained models are available at https://github.com/swz30/restormer.
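The key complexity argument can be sketched in NumPy: applying attention across channels rather than across spatial positions means the attention map no longer grows quadratically with resolution. This illustrates the general idea only, not Restormer's actual attention module:

```python
import numpy as np

def spatial_attention_map(x):
    """Vanilla self-attention map over pixels: (H*W) x (H*W),
    quadratic in spatial resolution."""
    C, H, W = x.shape
    flat = x.reshape(C, H * W)
    return flat.T @ flat

def channel_attention_map(x):
    """'Transposed' attention map over channels: C x C,
    independent of H and W."""
    C, H, W = x.shape
    flat = x.reshape(C, H * W)
    return flat @ flat.T

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32, 32))  # (channels, height, width)
s_map = spatial_attention_map(x)       # 1024 x 1024 entries
c_map = channel_attention_map(x)       # 16 x 16 entries
```

Even at this toy 32x32 resolution the spatial map is 4096 times larger than the channel map, and the gap widens quadratically as images grow.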
Deep neural networks have shown promising results in disease detection and classification from medical image data. However, they still struggle with real-world scenarios, in particular reliably detecting out-of-distribution (OOD) samples. We propose an approach to robustify OOD detection for skin and malaria samples without requiring labeled OOD samples during training. Specifically, we use metric learning along with logistic regression to force the deep networks to learn rich, class-representative features. To guide the learning process against OOD examples, we generate ID-like examples, by removing class-specific salient regions from an image or permuting image parts, and push them away from the in-distribution samples. At inference time, k-reciprocal nearest neighbors are used to detect out-of-distribution samples. For skin cancer OOD detection, we use two standard benchmark ISIC skin cancer datasets as ID, and six different datasets with varying difficulty levels are considered out-of-distribution. For malaria detection, we use the BBBC041 malaria dataset as ID and five different challenging datasets as out-of-distribution. We improve over the previous state of the art in skin cancer and malaria OOD detection by 5\% and 4\% in TNR@TPR95\%, respectively.
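The inference-time k-reciprocal nearest-neighbor test can be sketched as follows: a query is suspicious if the training samples it considers neighbors do not consider it a neighbor in return. The toy features, k value, and scoring function below are hypothetical, not the paper's implementation:

```python
import numpy as np

def knn(dists, k):
    """Indices of the k nearest neighbors for each row of a distance matrix."""
    return np.argsort(dists, axis=1)[:, :k]

def k_reciprocal_score(train_feats, query, k=3):
    """Count how many of the query's k nearest training samples also have
    the query among their own k nearest neighbors; a low count suggests OOD."""
    feats = np.vstack([train_feats, query[None]])
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    nn = knn(d, k)
    q = len(feats) - 1                   # index of the query
    return sum(1 for j in nn[q] if q in nn[j])

rng = np.random.default_rng(0)
train = rng.standard_normal((50, 8)) * 0.5   # ID feature cluster
id_query = train[0] + 0.01                   # near the ID cluster
ood_query = np.full(8, 10.0)                 # far from the ID cluster
id_score = k_reciprocal_score(train, id_query)
ood_score = k_reciprocal_score(train, ood_query)
```

The far-away query gets a reciprocal count of zero, while the in-distribution query keeps at least one mutual neighbor, which is the asymmetry the detector thresholds on.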
Astounding results from Transformer models on natural language tasks have intrigued the vision community into studying their application to computer vision problems. Among their salient benefits, Transformers enable modeling long-range dependencies between input sequence elements and support parallel processing of sequences, in contrast to recurrent networks such as long short-term memory (LSTM). Unlike convolutional networks, Transformers require minimal inductive biases in their design and are naturally suited to act as set functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text, and speech) using similar processing blocks, and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of Transformer models in the computer vision discipline. We start with an introduction to the fundamental concepts behind the success of Transformers, i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of Transformers in vision, including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization), and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis of open research directions and possible future work.
We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
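Since self-attention is the first of the fundamental concepts this survey introduces, here is a minimal NumPy sketch of scaled dot-product self-attention; the random weights and tiny dimensions are purely illustrative:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every output element is a weighted
    mix of all input elements, giving direct long-range dependencies."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
n, d = 5, 4                        # 5 sequence elements, 4-dim embeddings
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
```

Note that every row of `weights` attends over all n positions at once, which is the parallel, long-range behavior the survey contrasts with recurrent models.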